The Uncertainty Matrix

The uncertainty matrix is an instrument for generating an overview of where the most important (policy-relevant) uncertainties are expected to be located, and how these can be further characterized in terms of a number of uncertainty dimensions. Using the matrix can serve as a first step towards a more elaborate uncertainty assessment, in which the size of the uncertainties and their impact on the policy-relevant conclusions are explicitly assessed. The matrix features five principal dimensions of uncertainty, ‘location’, ‘level of uncertainty’, ‘nature of uncertainty’, ‘qualification of knowledge base’ and ‘value-ladenness of choices’, which are explained below in turn.

 


[Table 1a: The Uncertainty Matrix. Rows: the location categories (context, data, model, expert judgement, outputs); columns: the dimensions ‘level of uncertainty’, ‘nature of uncertainty’, ‘qualification of knowledge base’ and ‘value-ladenness of choices’.]

 

(i) The dimension ‘location’ indicates where uncertainty can manifest itself in the problem configuration at hand. Five categories are distinguished along this dimension:

  • The ‘context’ concerns the framing of the problem, including the choices determining what is considered inside and outside the system boundaries (‘delineation of the system and its environment’), as well as the completeness of this representation in view of the problem issues at hand. Some of these context-related choices are also reflected in the other location categories, such as which ‘data’ are considered to play a role, which ‘models’ are chosen to be used, and which ‘outcomes’ are taken to be of interest.
  • ‘Data’ refers to measurements, monitoring data, survey data, etc. used in the study, that is, information which is directly based on empirical research and data collection. The data used for calibration of the models involved are also included in this category.
  • ‘Model’ concerns the ‘model instruments’ employed in the study. We define ‘models’ in a broad sense: a model is a representation of an idea, object, process or mental construct. A model can exist solely in the human mind (mental, conceptual model), be a physical representation of a larger object (physical scale model), or be a more quantitative description using mathematical concepts and computers (mathematical and computer model). This category can thus encompass a broad spectrum of models, ranging from mental and conceptual models to more mathematical models (statistical models, causal process models, etc.), which are often implemented as computer models. For the latter class of models in particular, subcategories have been introduced, distinguishing between model structure (relations), model parameters (e.g., process parameters, initial and boundary conditions), model inputs (input data, external driving forces), and the technical model, which refers to the implementation in hardware and software.
  • ‘Expert judgement’ refers to those specific contributions to the assessment that are not fully covered by context, models and data, and that typically have a more qualitative, reflective and interpretative character. As such, this input could alternatively be viewed as part of the ‘mental model’.
  • The category of ‘outputs’ from a study refers to the outcomes, indicators, propositions or statements which are of interest in the context of the problem at hand. A schematic encoding of these five categories is sketched below.
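To make the five categories easy to refer to in tooling, they could be encoded as enumerations. The following is a minimal illustrative sketch (Python chosen arbitrarily; the names are ours and not part of the guidance):

    from enum import Enum

    class Location(Enum):
        """Rows of the uncertainty matrix: where uncertainty can manifest itself."""
        CONTEXT = "context"                    # framing, system boundaries
        DATA = "data"                          # measurements, monitoring, surveys
        MODEL = "model"                        # mental, physical or mathematical models
        EXPERT_JUDGEMENT = "expert judgement"  # qualitative, interpretative input
        OUTPUTS = "outputs"                    # outcomes, indicators, statements

    class ModelSubcategory(Enum):
        """Subcategories used for mathematical/computer models."""
        STRUCTURE = "model structure"    # relations
        PARAMETERS = "model parameters"  # process parameters, initial/boundary conditions
        INPUTS = "model inputs"          # input data, external driving forces
        TECHNICAL = "technical model"    # hardware/software implementation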

 

Remark: Notice that ‘scenarios’ in a broad sense have not been included as a separate category on the location axis. In fact, they show up at different locations, e.g., as part of the context, the model structure, the model inputs (scenario data) and expert judgement.

 

(ii) The dimension ‘level of uncertainty’ expresses how a specific uncertainty source can be classified on a gradual scale running from ‘knowing for certain’ to ‘not knowing’. Use is made of three distinct classes:

  • ‘Statistical uncertainty’: this concerns uncertainties which can adequately be expressed in statistical terms, e.g., as a range with associated probability (examples are statistical expressions for measurement inaccuracies, uncertainties due to sampling effects, uncertainties in model-parameter estimates, etc.). In the natural sciences, scientists generally refer to this category when they speak of uncertainty, thereby often implicitly assuming that the model relations involved offer adequate descriptions of the real system under study, and that the (calibration) data employed are representative of the situation under study. When this is not the case, however, ‘deeper’ forms of uncertainty are at play, which can surpass the statistical uncertainty in size and seriousness and which require adequate attention.
  • ‘Scenario uncertainty’: this concerns uncertainties which cannot be adequately depicted in terms of chances or probabilities, but which can only be specified in terms of (a range of) possible outcomes. For these uncertainties it is impossible to specify a degree of probability or belief, since the mechanisms which lead to the outcomes are not sufficiently known. Scenario uncertainties are often construed in terms of ‘what-if’ statements.
  • ‘Recognized ignorance’: this concerns those uncertainties of which we realize, in some way or another, that they are present, but of which we cannot establish any useful estimate, e.g., due to limits to predictability and knowability (‘chaos’) or due to unknown processes. The sketch following this list illustrates the difference between the three classes.
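As a minimal Python sketch (all numbers invented), the three classes differ in what can actually be written down about a source:

    import statistics

    # Statistical uncertainty: expressible as a range with associated probability.
    # (Invented measurements of a hypothetical yearly emission, in kiloton.)
    measurements = [4.9, 5.1, 5.0, 4.8, 5.2]
    mean = statistics.mean(measurements)
    sem = statistics.stdev(measurements) / len(measurements) ** 0.5
    ci95 = (mean - 1.96 * sem, mean + 1.96 * sem)  # normal approximation
    print(f"emission = {mean:.2f} kt, 95% CI ({ci95[0]:.2f}, {ci95[1]:.2f})")

    # Scenario uncertainty: only a range of possible outcomes, no probabilities.
    scenarios = {"low economic growth": 4.0, "high economic growth": 6.5}

    # Recognized ignorance: acknowledged, but no useful estimate can be given.
    recognized_ignorance = ["effect of unknown degradation processes"]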

 

Continuing on the scale beyond recognized ignorance, we arrive in the area of complete ignorance (‘unknown unknowns’) of which we cannot yet speak and where we inevitably grope in the dark.

We should notice that the uncertainties which manifest themselves at a specific location (e.g., uncertainties in model relations) can appear in each of the above-mentioned guises: while some aspects can adequately be expressed in statistical terms, other aspects can often only be expressed in terms of ‘what-if’ statements; moreover, there are typically aspects judged relevant but about which we know that we are (still) largely ‘ignorant’. Judging which aspects manifest themselves in which forms is often a subjective (and uncertain) matter.

(iii) The third characteristic dimension, ‘nature of uncertainty’, expresses whether uncertainty is primarily a consequence of the incompleteness and fallibility of knowledge (‘knowledge-related’, or ‘epistemic’, uncertainty), or whether it is primarily due to the intrinsically indeterminate and/or variable character of the system under study (‘variability-related’, or ‘ontic’, uncertainty).

  • Knowledge-related uncertainty can possibly, though not necessarily, be reduced by means of more measurements, better models and/or more knowledge. However, it is also possible that this knowledge-related uncertainty is increased by doing more research and by progressing insight.
  • Variability-related uncertainty is typically not reducible by means of more research (e.g., inherent indeterminacy and/or unpredictability, randomness, chaotic behaviour). Although it is possible to know the characteristics of a system at a certain level of aggregation, e.g., knowing the probability distribution or the ‘strange attractor’, it is not always possible to predict the behaviour or properties of the individual elements which form part of the system at a lower level. The numerical sketch below illustrates this contrast.
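The distinction can be made concrete with a small simulation (assumptions: a hypothetical population with inherent spread of about 2.0; the numbers are invented). More sampling shrinks our epistemic uncertainty about the mean, but leaves the ontic spread of individual outcomes unchanged:

    import random

    random.seed(1)

    def sample_population(n: int) -> list[float]:
        # Individual outcomes vary inherently (ontic variability): sd about 2.0.
        return [random.gauss(10.0, 2.0) for _ in range(n)]

    for n in (10, 10_000):
        xs = sample_population(n)
        mean = sum(xs) / n
        sd = (sum((x - mean) ** 2 for x in xs) / (n - 1)) ** 0.5
        sem = sd / n ** 0.5  # epistemic uncertainty about the mean
        print(f"n={n:6d}  spread of individuals (sd)={sd:.2f}  "
              f"uncertainty of mean (sem)={sem:.3f}")

    # The sem (epistemic, reducible) shrinks as n grows; the sd of the
    # individual outcomes (ontic, irreducible) stays essentially constant.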

 

Remark: In many situations uncertainty manifests itself as a mix of both forms; the delineation between ‘epistemic’ and ‘ontic’ cannot be made unequivocally in all cases. Moreover, a combination of taste, tradition, the specific problem features that are of interest, and the current level of knowledge and ignorance with respect to the specific subject determines to a large extent where the dividing line is drawn. In practice, the distinction between epistemic and ontic is therefore often determined by the active choice of the researcher, rather than being an innate and fundamental property of reality itself. Notice that this choice can be decisive for the outcomes and interpretations of the uncertainty assessment. Still, using the distinction between ‘epistemic’ and ‘ontic’ uncertainty can yield important information on the (im)possibility of reducing the uncertainties by, e.g., more research, better measurements, better models, etc. That is, although not completely equivalent, this distinction reflects to a large extent the distinction between uncertainties which are ‘reducible’ and those which are ‘not reducible’ by means of further research.

 

(iv) The fourth dimension relevant for characterizing uncertainty concerns the ‘qualification of the knowledge base’. This refers to the degree of underpinning of the established results and statements. The phrase ‘established results and statements’ can be interpreted in a broad sense here: it can refer to the policy-advice statement as such (e.g., ‘the norm will still be exceeded when the proposed policy measures have become effective’, ‘the total yearly emission of substance A is X kiloton’) as well as to assertions about the uncertainty in this statement (e.g., ‘the uncertainty in the total yearly emission of substance A is . . . (95% confidence interval)’). The degree of underpinning is divided into three classes: weak, fair and strong. If the underpinning is weak, this indicates that the statement of concern is surrounded by much (knowledge-related) uncertainty and deserves further attention. This classification moreover offers suggestions about the extent to which the uncertainty is reducible by providing a better underpinning.

Notice that this dimension in fact characterizes the reliability of the information (data, knowledge, methods, argumentation, etc.) which is used in the assessment. Criteria such as empirical, theoretical or methodological underpinning, and acceptance/support within and outside the peer community, can be used for assessing and expressing the level of reliability. If required, a so-called ‘pedigree analysis’ can be performed, which results in a semi-quantitative scoring of the underpinning on the basis of a number of qualitative criteria, such as the aforementioned ones.
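As a rough sketch of what such a semi-quantitative scoring could look like (the scale, the equal weighting and the mapping to weak/fair/strong below are our assumptions, not a prescribed scheme):

    # Hypothetical pedigree scores on a 0 (weak) .. 4 (strong) scale; the
    # criteria follow the ones mentioned above but are illustrative.
    pedigree = {
        "empirical underpinning": 3,
        "theoretical underpinning": 2,
        "methodological underpinning": 3,
        "acceptance within/outside peer community": 1,
    }
    strength = sum(pedigree.values()) / (4 * len(pedigree))  # normalize to 0..1
    label = "weak" if strength < 1 / 3 else "fair" if strength < 2 / 3 else "strong"
    print(f"knowledge-base strength: {strength:.2f} -> {label}")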

(v) The final dimension for characterizing uncertainties denotes whether a substantial amount of ‘value-ladenness’ and subjectivity is involved in making the various implicit and explicit choices during the environmental assessment. This concerns, among other things, the way in which (i) the problem is framed vis-à-vis the various views and perspectives on the problem, (ii) the knowledge and information (data, models) are selected and applied, and (iii) the explanations and conclusions are expressed and formulated. If the value-ladenness is high for relevant parts of the assessment, then it is imperative to analyze whether or not the results of the study are strongly influenced by the choices involved, and whether this could lead to a certain arbitrariness, ambiguity or uncertainty in the policy-relevant conclusions. This could then be a reason to explicitly deal with different views and perspectives in the assessment and to discuss the scope and robustness of the conclusions in an explicit manner. In order to identify value-ladenness one could, e.g., use §§1 and 2 of the Detailed Guidance.

 

Instructions for Filling Out the Uncertainty Matrix

1. Indicate in the uncertainty matrix (table 1a) where the most relevant uncertainties or uncertainty sources are to be expected:

  • Indicate first in which row of the matrix the uncertainty source is located (location dimension).
  • Subsequently, further characterize the uncertainty source by use of the columns (representing four uncertainty dimensions other than
    location).
  • While doing this, use an ‘ABC’ coding to indicate the relevance of the specific uncertainty sources (do not fill in anything if the source
    is considered hardly important or unimportant):

A = of crucial importance
B = important
C = of medium importance

By attaching an index to this coding (e.g., A1, B1, C1, A2, B2, C2, etc.) one can explicitly indicate to which uncertainty source the coding refers (e.g., index 1 refers to source 1, index 2 to source 2, etc.). Notice that a specific source of uncertainty can appear at different places in the matrix, depending on how the source manifests itself and how it can be characterized (see sub [B] below for more explanation).
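One possible lightweight encoding of filled-in cells, including the indexed ABC codes (the example sources, row and column names are invented for illustration):

    # A sparse mapping from (location row, dimension column) to indexed codes.
    # Row and column names are free-form strings here for brevity.
    matrix: dict[tuple[str, str], list[str]] = {}

    def register(location: str, column: str, code: str) -> None:
        matrix.setdefault((location, column), []).append(code)

    # Source 1: calibration data coverage (crucial): statistical and epistemic.
    register("data", "level: statistical", "A1")
    register("data", "nature: epistemic", "A1")
    # Source 2: framing of the system boundary (important): value-laden.
    register("context", "value-ladenness", "B2")

    for (row, col), codes in matrix.items():
        print(f"{row:10s} | {col:20s} | {', '.join(codes)}")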

2. Construct a table with two columns, ‘Brief description of uncertainty source’ and ‘Explanation and justification of the specifications given in the matrix’. In this table, briefly describe each uncertainty source and explain or motivate the specifications given in the uncertainty matrix (e.g., concerning the location and further uncertainty characterisation, and concerning the ABC code assigned), adding references to the literature if deemed appropriate.
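A sketch of such a two-column table, again with invented content, could be produced as follows:

    # Invented example rows for the justification table.
    rows = [
        ("1. calibration data coverage",
         "A1 under data/statistical: sparse monitoring network, large sampling error"),
        ("2. framing of system boundary",
         "B2 under context/value-ladenness: boundary choice contested by stakeholders"),
    ]
    header = ("Brief description of uncertainty source",
              "Explanation and justification of the specifications given in the matrix")
    print(f"{header[0]:45s} | {header[1]}")
    for description, justification in rows:
        print(f"{description:45s} | {justification}")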

 
